% scribe: Lauren Deason
% lastupdate: Oct. 2, 2005
% lecture: 4
% title: Product spaces and independence
% references: Durrett, section 1.4
% keywords: product space, product measure, Fubini's theorem, projection map, independence, joint distribution, product sigma-field
% end
\documentclass[12pt,letterpaper]{article}
\include{macros}
\newcommand{\FF}{\mathcal F}
\newcommand{\CC}{\mathcal C}
\begin{document}
\lecture{4}{Product spaces and independence}{Lauren Deason}
{laurend@math.berkeley.edu}

\section{Product spaces and Fubini's Theorem}
% keywords: product space, product measure, Fubini's theorem, projection map, product sigma-field
% end

\begin{definition}
If $(\Omega_i,\FF_i)$, $i\in I$ (an index set), are measurable spaces, form the product $\prod_i\Omega_i$. For simplicity, suppose all the factors are the same: $\Omega_i=\Omega_1$ for every $i$. Then $\prod_i\Omega_i$ (write $\Omega$ for this) is the space of all maps $I\to\Omega_1$. A point $\omega\in\prod_i\Omega_i$ is written $\omega=(\omega_i : i\in I)$ with $\omega_i\in\Omega_i$. The space $\Omega$ is equipped with \emph{projections} $X_i:\Omega\to\Omega_i$, $X_i(\omega)=\omega_i$.
\end{definition}

[picture of square, $\Omega_1$ on one side, $\Omega_2$ on the other, point $\omega$ in the middle, mapped under the projections to each side]

\begin{definition}
The \emph{product $\sigma$-field} on $\Omega$ is the one generated by the projections:
$$\FF=\sigma\left(\{(X_i\in F_i) : i\in I,\ F_i\in\FF_i\}\right).$$
For example, with $I=\{1,2\}$, the rectangle
$$F_1\times F_2 = \{\omega \mid \omega_1\in F_1,\ \omega_2\in F_2\} = (X_1\in F_1)\cap(X_2\in F_2)$$
lies in $\FF$.
\end{definition}

We now construct the \emph{product measure}, starting with the case $n=2$: $(\Omega,\FF)=(\Omega_1,\FF_1)\times(\Omega_2,\FF_2)$. Suppose we have probability measures $P_i$ on $(\Omega_i,\FF_i)$, $i=1,2$. Then there is a natural way to construct the product measure $P=P_1\times P_2$ on $(\Omega,\FF)$.

\emph{Key idea:} for $F_1\in\FF_1$ and $F_2\in\FF_2$, $P(F_1\times F_2)=P_1(F_1)\,P_2(F_2)$.

\begin{theorem}[Existence of product measure and Fubini's theorem]
(Notation: $\omega=(\omega_1,\omega_2)$.) There is a unique probability measure $P$ on $(\Omega_1,\FF_1)\times(\Omega_2,\FF_2)$ such that for every non-negative product-measurable [i.e., measurable w.r.t.\ the product $\sigma$-field] function $f:\Omega_1\times\Omega_2\to[0,\infty)$,
\begin{align*}
\int_{\Omega_1\times\Omega_2} f(\omega_1,\omega_2)\,P(d\omega)
&= \int_{\Omega_1}\left[\int_{\Omega_2} f(\omega_1,\omega_2)\,P_2(d\omega_2)\right]P_1(d\omega_1)\\
&= \int_{\Omega_2}\left[\int_{\Omega_1} f(\omega_1,\omega_2)\,P_1(d\omega_1)\right]P_2(d\omega_2).
\end{align*}
\end{theorem}

Note that this tells us very explicitly what $P$ is. Take $f=1_A$ for $A\in\FF$, so that $P(A)=\int_{\Omega} 1_A(\omega)\,P(d\omega)$. Fix $\omega_2$ and look at the section $A_{\omega_2}=\{\omega_1\mid (\omega_1,\omega_2)\in A\}$; the theorem then gives
\begin{equation*}
\tag{$*$}\label{star}
P(A) = \int_{\Omega_2} P_1(A_{\omega_2})\,P_2(d\omega_2).
\end{equation*}

To see that \eqref{star} makes sense at the level of sets, look at the collection $\CC$ of all $A\subset\Omega_1\times\Omega_2$ such that:
\begin{enumerate}
\item $A_{\omega_2}\in\FF_1$ for every $\omega_2\in\Omega_2$;
\item $\omega_2\mapsto P_1(A_{\omega_2})$ is measurable.
\end{enumerate}
Observe:
\begin{enumerate}
\item $\CC$ contains all rectangles $A_1\times A_2$ (these form a $\pi$-system, i.e.\ are closed under intersection), and for them \eqref{star} gives $P(A_1\times A_2)=P_1(A_1)P_2(A_2)$;
\item $\CC$ is a $\lambda$-system.
\end{enumerate}
Thus $\CC\supset\sigma(\{A_1\times A_2\})$, which is the product $\sigma$-field. (Checking the $\lambda$-system property uses the monotone convergence theorem.)
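As a quick sanity check of \eqref{star} (a small added example): take $\Omega_1=\Omega_2=\{0,1\}$ with the fair-coin measures $P_1(\{0\})=P_1(\{1\})=P_2(\{0\})=P_2(\{1\})=\tfrac12$, and let $A=\{(0,0),(1,1)\}$ be the diagonal. Each section is a single point, $A_{\omega_2}=\{\omega_2\}$, so $P_1(A_{\omega_2})=\tfrac12$ for both values of $\omega_2$, and
\begin{equation*}
P(A)=\int_{\Omega_2} P_1(A_{\omega_2})\,P_2(d\omega_2)=\tfrac12\cdot\tfrac12+\tfrac12\cdot\tfrac12=\tfrac12,
\end{equation*}
which agrees with direct counting: $A$ contains two of the four equally likely points of $\Omega_1\times\Omega_2$.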
The order of integration does not matter: integrating in the other order gives a measure which agrees with $P$ on rectangles (by commutativity of multiplication), and the rectangles generate the product $\sigma$-field (and form a $\pi$-system), so we can apply the $\pi$-$\lambda$ theorem. We then extend from indicator functions to simple functions by linearity, and so on to all non-negative measurable functions by monotone convergence.

\section{Independence}
% keywords: independence, joint distribution
% end

Random variables $X_1$ and $X_2$ with values in $(\Omega_1,\FF_1)$ and $(\Omega_2,\FF_2)$ are called \emph{independent} iff
$$\P(X_1\in F_1,X_2\in F_2)=\P(X_1\in F_1)\P(X_2\in F_2)$$
for all $F_1\in\FF_1$, $F_2\in\FF_2$.

Observe:
\begin{enumerate}
\item If we take $\Omega=\Omega_1\times\Omega_2$, $\FF=\FF_1\times\FF_2$, $\P=P_1\times P_2$, and $X_i(\omega)=\omega_i$ the projections as before, then $X_1$ and $X_2$ are independent random variables with distributions $P_1$ and $P_2$.
\item If $X_1$ and $X_2$ are independent random variables defined on any background space $(\Omega,\FF,\P)$, then the \emph{joint distribution} of $(X_1,X_2)$ is the product measure $P_1\times P_2$, where $P_i=P_{X_i}$ is the $\P$-distribution of $X_i$.
\end{enumerate}

Given $\omega\mapsto X_1(\omega)$ and $\omega\mapsto X_2(\omega)$, look at $\omega\mapsto(X_1(\omega),X_2(\omega))$ as a map from $\Omega$ to $\Omega_1\times\Omega_2$. We check that this map is product-measurable: if $A_1\in\FF_1$ and $A_2\in\FF_2$, then
$$\left\{(X_1,X_2)\in A_1\times A_2\right\} = (X_1\in A_1)\cap(X_2\in A_2)\in\FF,$$
and such rectangles generate the product $\sigma$-field. So $P_{(X_1,X_2)}$, the $\P$-distribution of $(X_1,X_2)$, makes sense.

\begin{corollary}[Fubini]
If $X_1$ and $X_2$ are independent random variables and $f\ge 0$ is product-measurable, then
\begin{equation*}
\E\left[f(X_1,X_2)\right]=\int\left[\int f(x_1,x_2)\,P_1(dx_1)\right]P_2(dx_2).
\end{equation*}
\end{corollary}

This justifies the usual formulas for the distributions of $X_1+X_2$ and $X_1X_2$ for real random variables.

Example: suppose $X_1$ has density $f_1$ and $X_2$ has density $f_2$, i.e.\ $\P(X_i\in A)=\int_A f_i(x)\,dx$. Then $X_1+X_2$ has the density
$$f(z)=(f_1 * f_2)(z) = \int_{\R} f_1(x)f_2(z-x)\,dx$$
(the convolution). Check this by an application of Fubini's theorem; it gives $\E[g(X_1+X_2)]=\int g(z)f(z)\,dz$.

\emph{Useful fact:} For real random variables, $X_1$ and $X_2$ are independent if and only if
$$\P(X_1\le x_1,X_2\le x_2)=\P(X_1\le x_1)\P(X_2\le x_2)\quad\text{for all real } x_1,x_2.$$
Sketch: fix $F_1=(-\infty,x_1]$ first, and consider the collection of all sets $F_2$ with $\P(X_1\in F_1,X_2\in F_2)=\P(X_1\in F_1)\P(X_2\in F_2)$; this is a $\lambda$-system containing the $\pi$-system of half-lines, hence it contains all Borel sets. Then repeat the argument in the first coordinate.

Extend to $n$ variables: $X_1,\ldots,X_n$ are independent iff
$$\P\left(\bigcap_i(X_i\in F_i)\right) = \prod_i \P(X_i\in F_i).$$
The same discussion goes through with $n$-fold product spaces. Most proofs work by induction on $n$, reducing to the case $n=2$. For example, $X_1,X_2,X_3$ are independent iff $X_1$ and $X_2$ are independent and $(X_1,X_2)$ and $X_3$ are independent.

Intuitive properties hold: e.g.\ if $X_1,\ldots,X_5$ are independent, then $X_1+X_3+X_5$ and $X_2+X_4$ are independent. To check this, we need to check that we can factor the probabilities as above; this works for simple functions and then extends.

\begin{proposition}
If $X$ and $Y$ are independent, $\E(|X|)<\infty$, and $\E(|Y|)<\infty$, then $\E(XY)=\E(X)\E(Y)$.
\end{proposition}

\begin{proof}
Use Fubini. Be careful: we only proved Fubini for non-negative functions. First check the case $X\ge 0$, $Y\ge 0$:
$$\E(XY)=\int_{\R}\int_{\R} xy\,\P(X\in dx)\,\P(Y\in dy)=\E(X)\E(Y),$$
by Fubini. In general, write $X=X^+-X^-$ and $Y=Y^+-Y^-$; the product splits into four pieces, which we put back together.
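Explicitly (spelling out the last step; note that $X^{\pm}$ and $Y^{\pm}$ are still independent, being measurable functions of $X$ and $Y$ respectively):
\begin{align*}
\E(XY) &= \E(X^+Y^+)-\E(X^+Y^-)-\E(X^-Y^+)+\E(X^-Y^-)\\
&= \E(X^+)\E(Y^+)-\E(X^+)\E(Y^-)-\E(X^-)\E(Y^+)+\E(X^-)\E(Y^-)\\
&= \left[\E(X^+)-\E(X^-)\right]\left[\E(Y^+)-\E(Y^-)\right] = \E(X)\E(Y),
\end{align*}
where every term is finite because $\E(|X|)<\infty$ and $\E(|Y|)<\infty$.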
\end{proof}

Note that the most general form of Fubini's theorem gives
$$\E\left[f(X,Y)\right] = \int\int f(x,y)\,P_X(dx)\,P_Y(dy)$$
(in either order), provided this iterated integral is finite when $f$ is replaced by $|f|$.

Recall that $\E(X+Y)=\E(X)+\E(Y)$ always, provided both expectations are finite. Also
$$\var(X+Y)=\var(X)+\var(Y)+2\,\E\left[(X-\E(X))(Y-\E(Y))\right]$$
(the expectation in the last term is the \emph{covariance} of $X$ and $Y$), and the last term is $0$ if $X$ and $Y$ are independent, as spelled out below.
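To check the last claim (a short added verification, assuming finite variances): expanding the square,
\begin{align*}
\var(X+Y) &= \E\left[\big((X-\E(X))+(Y-\E(Y))\big)^2\right]\\
&= \var(X)+\var(Y)+2\,\E\left[(X-\E(X))(Y-\E(Y))\right],
\end{align*}
and if $X$ and $Y$ are independent, so are the centered variables $X-\E(X)$ and $Y-\E(Y)$, whence by the proposition
$$\E\left[(X-\E(X))(Y-\E(Y))\right]=\E\left[X-\E(X)\right]\E\left[Y-\E(Y)\right]=0.$$

\end{document}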